A Stochastic Optimization Approach for Unsupervised Kernel Regression
Authors
Abstract
Unsupervised kernel regression (UKR), the unsupervised counterpart of the Nadaraya-Watson estimator, is a dimension reduction technique for learning low-dimensional manifolds. It is based on optimizing representative low-dimensional latent variables with respect to the data-space reconstruction error. Scaling initial locally linear embedding solutions and optimizing in latent space together constitute a continuous multi-modal optimization problem. In this paper we employ evolutionary approaches to optimize the UKR model: starting from locally linear embedding solutions, stochastic search techniques are used to optimize the latent variables. An experimental study compares covariance matrix adaptation evolution strategies to an iterated local search evolution strategy.
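The objective described above can be made concrete with a small sketch. The function below (an illustrative name, not from the paper) computes the leave-one-out data-space reconstruction error of a UKR model with a Gaussian kernel: each data point is reconstructed by a Nadaraya-Watson estimate over the latent positions of all other points. This is the fitness that a stochastic search over the latent variables would minimize; the fixed bandwidth is an assumption for simplicity.

```python
import numpy as np

def ukr_reconstruction_error(X_latent, Y, bandwidth=1.0):
    """Leave-one-out data-space reconstruction error of a UKR model.

    X_latent: (n, q) latent positions, Y: (n, d) observed data.
    Each point is reconstructed from the Nadaraya-Watson estimate
    over all *other* points (leave-one-out), so the trivial solution
    of reproducing each point from itself is excluded.
    """
    n = Y.shape[0]
    # pairwise squared distances in latent space, shape (n, n)
    d2 = ((X_latent[:, None, :] - X_latent[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel matrix
    np.fill_diagonal(K, 0.0)                   # leave-one-out
    W = K / K.sum(axis=1, keepdims=True)       # normalized kernel weights
    Y_hat = W @ Y                              # Nadaraya-Watson reconstruction
    return ((Y - Y_hat) ** 2).sum() / n
```

An evolutionary optimizer would treat the flattened `X_latent` as the search-space individual and this error as its fitness.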
Similar Resources
Evolutionary Unsupervised Kernel Regression
Dimension reduction and manifold learning play an important role in robotics, multimedia processing and data mining. For these tasks strong methods like Unsupervised Kernel Regression [4, 7] or Gaussian Process Latent Variable Models [5, 6] have been proposed in the last years. But many methods suffer from numerous local optima and crucial parameter dependencies. We use advanced methods from st...
Full Text

Evolutionary kernel density regression
The Nadaraya-Watson estimator, also known as kernel regression, is a density-based regression technique. It weights output values by the relative densities in input space, where the density is measured with kernel functions that depend on bandwidth parameters. In this work we present an evolutionary bandwidth optimizer for kernel regression. The approach is based on a robust loss function, leave-on...
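The bandwidth dependence mentioned above can be illustrated with a minimal sketch (function names are illustrative, and a Gaussian kernel with plain squared error is assumed rather than the robust loss of the abstract): a Nadaraya-Watson predictor plus the leave-one-out error that a bandwidth optimizer would minimize.

```python
import numpy as np

def nw_predict(x_train, y_train, x_query, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * h ** 2))
    return (K @ y_train) / K.sum(axis=1)

def loo_error(x, y, h):
    """Leave-one-out squared error for bandwidth h -- the objective a
    bandwidth optimizer (evolutionary or otherwise) would minimize."""
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = nw_predict(x[mask], y[mask], x[i:i + 1], h)[0]
        errs.append((y[i] - pred) ** 2)
    return float(np.mean(errs))
```

A too-large bandwidth oversmooths toward the global mean, while a well-chosen one tracks the local structure, which is why the leave-one-out error is a sensible fitness for bandwidth search.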
Full Text

Covariance Matrix Self-Adaptation and Kernel Regression - Perspectives of Evolutionary Optimization in Kernel Machines
Kernel-based techniques have shown outstanding success in data mining and machine learning in the recent past. Many optimization problems of kernel-based methods suffer from multiple local optima. Evolution strategies have grown into successful methods for non-convex optimization. This work shows how both areas can profit from each other. We investigate the application of evolution strategies to N...
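As a minimal illustration of the strategy family these abstracts rely on, the sketch below implements a (1+1)-evolution strategy with a simple 1/5th-success-rule-style step-size adaptation (the adaptation constants are illustrative, not taken from any of the cited works). Covariance matrix adaptation extends this idea by additionally adapting the full mutation covariance.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """Minimal (1+1)-evolution strategy for minimizing f.

    One parent, one Gaussian-mutated offspring per iteration; the
    step size sigma widens after a success and contracts after a
    failure, in the spirit of the 1/5th success rule.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5   # success: widen the search
        else:
            sigma *= 0.9   # failure: contract
    return x, fx
```

On a multi-modal fitness such as the UKR reconstruction error, such a strategy would typically be restarted from several initial embeddings, which is the role the iterated local search variant plays in the main paper.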
Full Text

Stochastic Low-Rank Kernel Learning for Regression
We present a novel approach to learning a kernel-based regression function. It is based on the use of conical combinations of data-based parameterized kernels and on a new stochastic convex optimization procedure for which we establish convergence guarantees. The overall learning procedure has the nice properties that a) the learned conical combination is automatically designed to perform the regres...
Full Text

Decentralized Online Learning with Kernels
We consider multi-agent stochastic optimization problems over reproducing kernel Hilbert spaces (RKHS). In this setting, a network of interconnected agents aims to learn decision functions, i.e., nonlinear statistical models, that are optimal in terms of a global convex functional that aggregates data across the network, with only access to locally and sequentially observed samples. We propose ...
Full Text